The interaction between cores and memory blocks in multiprocessor chips and smart systems has always been a concern, as it affects network latency, memory capacity, and power consumption. A 2.5-dimensional architecture has been introduced in which communication between the processing elements and the memory blocks is carried out through a layer called the interposer. When a core wants to communicate with another core, it uses the top layer; when it wants to access the memory blocks, it uses the interposer layer. If coherence traffic in the processing layer increases to the point of congestion, part of this traffic may be transferred to the interposer network through a mechanism called load balancing. However, when coherence traffic is diverted to the interposer layer, it may interfere with memory traffic. This paper introduces a mechanism that avoids this interference by defining two separate virtual channels and using multiple links, where the selected link determines which memory block is accessed. Our method uses the destination address to decide which channel and link should be selected when the interposer layer is used. Simulation results show that the proposed mechanism improves latency by 32% and 14% compared to the traditional load-balancing and unbalanced mechanisms, respectively.
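To make the selection idea concrete, the following is a minimal Python sketch of a destination-address-based choice of virtual channel and interposer link of the kind described above. The virtual-channel roles, the number of links, and the address-interleaving rule are assumptions for illustration only, not the paper's exact design.

```python
# Illustrative sketch only: the abstract does not specify the exact policy.
# Channel roles, link count, and the interleaving rule below are assumptions.

COHERENCE_VC = 0    # virtual channel assumed to be reserved for coherence traffic
MEMORY_VC = 1       # virtual channel assumed to be reserved for memory traffic
NUM_MEM_LINKS = 4   # assumed number of interposer links toward memory blocks


def select_channel_and_link(dest_addr: int, is_coherence: bool) -> tuple[int, int]:
    """Pick a virtual channel and an interposer link from the destination address.

    Coherence packets diverted to the interposer use a separate virtual channel
    so they do not block memory packets; the link is chosen from the destination
    address (here, simple interleaving over the low-order bits).
    """
    vc = COHERENCE_VC if is_coherence else MEMORY_VC
    link = dest_addr % NUM_MEM_LINKS  # assumed address-interleaved link mapping
    return vc, link


if __name__ == "__main__":
    # Example: a memory request to block 0x2A and a diverted coherence packet.
    print(select_channel_and_link(0x2A, is_coherence=False))  # (1, 2)
    print(select_channel_and_link(0x2A, is_coherence=True))   # (0, 2)
```

In this sketch, keeping coherence and memory traffic on distinct virtual channels is what prevents one class from blocking the other on shared interposer routers, while the address-based link choice spreads memory accesses across the available links.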